Learning Partial Models for Hierarchical Planning
Authors
Abstract
AI planning research typically assumes that complete action models are given. In contrast, popular reinforcement learning approaches such as Q-learning eschew models and planning altogether. Neither approach is satisfactory for achieving robust human-level AI that combines planning and learning in rich, structured domains. In this paper, we introduce the idea of planning with partial models. While complete action models may be exponentially large, some domains still admit polynomial-size partial models that are adequate for hierarchical planning. We describe algorithms for planning with partial models in serializable domains, and for learning such models from observation. Empirically, we demonstrate the effectiveness of partial models for learning and hierarchical planning in versions of the taxi domain.
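The abstract does not spell out the algorithms, so the sketch below is only a rough illustration of the general idea, not the paper's method. All names (PartialModel, plan_serially, the toy one-dimensional taxi world) are hypothetical. It learns, per action, the observed value changes of individual state variables from random-exploration transitions, and then plans a serialized sequence of subgoals over that learned partial model.

```python
# Illustrative sketch only (hypothetical names, not the paper's algorithm):
# learn per-action effects on individual state variables from observed
# transitions, then plan one subgoal at a time over the learned partial model.
import random
from collections import defaultdict, deque


class PartialModel:
    """Stores, for each action, only the observed (old -> new) value changes
    of individual state variables, instead of a complete transition model."""

    def __init__(self):
        # effects[action][variable] = set of observed (old_value, new_value) pairs
        self.effects = defaultdict(lambda: defaultdict(set))

    def observe(self, state, action, next_state):
        for var in state:
            if state[var] != next_state[var]:
                self.effects[action][var].add((state[var], next_state[var]))

    def predict(self, state, action):
        # Variables with no matching recorded effect are assumed unchanged;
        # preconditions are not modeled, so predictions can be optimistic.
        nxt = dict(state)
        for var, pairs in self.effects[action].items():
            for old, new in pairs:
                if nxt[var] == old:
                    nxt[var] = new
                    break
        return nxt


def plan_serially(model, state, subgoals, actions, max_depth=6):
    """Achieve the subgoals one at a time (a serialized decomposition), using
    breadth-first search over the learned partial model for each subgoal."""
    plan = []
    for goal_var, goal_val in subgoals:
        frontier = deque([(state, [])])
        seen = {tuple(sorted(state.items()))}
        reached = None
        while frontier:
            s, path = frontier.popleft()
            if s[goal_var] == goal_val:
                reached = (s, path)
                break
            if len(path) == max_depth:
                continue
            for a in actions:
                s2 = model.predict(s, a)
                key = tuple(sorted(s2.items()))
                if key not in seen:
                    seen.add(key)
                    frontier.append((s2, path + [a]))
        if reached is None:
            return None  # subgoal unreachable under the learned model
        state, path = reached
        plan += path
    return plan


# Toy 1-D taxi-like world (made up for this sketch): the taxi moves on cells
# 0..4, the passenger waits at cell 0 and must be delivered to cell 4.
def true_step(state, action):
    s = dict(state)
    if action == "right":
        s["taxi"] = min(s["taxi"] + 1, 4)
    elif action == "left":
        s["taxi"] = max(s["taxi"] - 1, 0)
    elif action == "pickup" and s["taxi"] == 0:
        s["has_passenger"] = True
    elif action == "dropoff" and s["taxi"] == 4:
        s["has_passenger"] = False
    return s


ACTIONS = ["left", "right", "pickup", "dropoff"]
random.seed(0)

# Learn the partial model from random exploration of the true dynamics.
model = PartialModel()
state = {"taxi": 2, "has_passenger": False}
for _ in range(2000):
    action = random.choice(ACTIONS)
    next_state = true_step(state, action)
    model.observe(state, action, next_state)
    state = next_state

# Plan the task as a sequence of subgoals: reach the passenger, pick up,
# reach the destination, drop off.
start = {"taxi": 2, "has_passenger": False}
subgoals = [("taxi", 0), ("has_passenger", True),
            ("taxi", 4), ("has_passenger", False)]
print(plan_serially(model, start, subgoals, ACTIONS))
# e.g. ['left', 'left', 'pickup', 'right', 'right', 'right', 'right', 'dropoff']
```

Note that this learned model ignores preconditions (for instance, that pickup only works at the passenger's cell), which is exactly why the subgoals must be tackled in a serializable order: each earlier subgoal leaves the state where the optimistic prediction for the next one happens to be correct. This very loosely mirrors the abstract's restriction to serializable domains.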
Similar Resources
Fuzzy Hierarchical Location-Allocation Models for Congested Systems
There exist various service systems that have a hierarchical structure. In hierarchical service networks, facilities at different levels provide different types of services. For example, in health care systems, general centers provide low-level services such as primary health care, while specialized hospitals provide high-level services. Because of demand congestion in service networ...
Learning HTN Method Preconditions and Action Models from Partial Observations
To apply hierarchical task network (HTN) planning to real-world planning problems, one needs to encode the HTN schemata and action models beforehand. However, acquiring such domain knowledge is difficult and time-consuming because the HTN domain definition involves a significant knowledge-engineering effort. A system that can learn the HTN planning domain knowledge automatically would save time...
Learning Applicability Conditions in AI Planning from Partial Observations
AI planning has become more and more important in many real-world domains such as military applications and intelligent scheduling. However, planning systems require complete specifications of domain models, which can be difficult to encode, even for domain experts. Thus, research on effective and efficient methods to construct domain models or applicability conditions for planning automaticall...
Monte Carlo Hierarchical Model Learning
Reinforcement learning (RL) is a well-established paradigm for enabling autonomous agents to learn from experience. To enable RL to scale to any but the smallest domains, it is necessary to make use of abstraction and generalization of the state-action space, for example with a factored representation. However, to make effective use of such a representation, it is necessary to determine which s...
Reinforcement Learning with a Hierarchy of Abstract Models
Reinforcement learning (RL) algorithms have traditionally been thought of as trial and error learning methods that use actual control experience to incrementally improve a control policy. Sutton's DYNA architecture demonstrated that RL algorithms can work as well using simulated experience from an environment model, and that the resulting computation was similar to doing one-step lookahead plan...